
fix(cli): object add-link: do not allow blocks over BS limit #8414

Merged
6 commits merged into master on Sep 28, 2021

Conversation

schomatis
Contributor

Trying not to be too invasive, since this is just an ad-hoc check for a deprecated command, I have added the check at the CLI level; otherwise I can go deeper in the ObjectAPI.

If the placement of the check is correct, I will add a sharness test. If instead the check should go deeper in the stack, then the test should likely be written in Go. Waiting on an initial review to decide the course.

Optional additional proposals to the current PR:

  • Add the same check for the (less common) set-data and append-data commands which can potentially go over the BS limit just as well.
  • Add the check to rm-link but only as a warning to flag if the current block being manipulated is already over the limit.

Fixes #7421.
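The check described above can be sketched as a small guard comparing the size of the encoded node against bitswap's 1 MiB transfer limit. This is an illustrative stand-in, not the actual go-ipfs code: the names `MaxBlockSize` and `checkBlockSize` and the error wording are assumptions.

```go
package main

import "fmt"

// MaxBlockSize mirrors the 1 MiB limit on blocks that bitswap can
// exchange; larger blocks cannot be transferred to other peers.
const MaxBlockSize = 1024 * 1024

// checkBlockSize rejects an encoded node whose serialized form
// exceeds the limit. Hypothetical sketch of the CLI-level guard.
func checkBlockSize(encoded []byte) error {
	if size := len(encoded); size > MaxBlockSize {
		return fmt.Errorf("produced block of %d bytes exceeds the %d-byte limit", size, MaxBlockSize)
	}
	return nil
}

func main() {
	fmt.Println(checkBlockSize(make([]byte, 2*1024*1024)) != nil) // true: oversized node rejected
	fmt.Println(checkBlockSize([]byte("small")))                  // <nil>: small node passes
}
```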

@schomatis schomatis self-assigned this Sep 7, 2021
Member

@lidel lidel left a comment


Placement should be ok.
Tested with this one-liner that patches until it errors; works as expected 👍

$ echo QmUNLLsPACCz1vLxQVkXqqLX5R1X345qqfHbsf67hvA3Nn>cid; while ipfs object patch $(<cid) add-link $(<cid).jpg Qmayz4F4UzqcAMitTzU4zCSckDofvxstDuj3y7ajsLLEVs 2>&1 > cid; do; <cid; done
QmcPUbyiR2g9RbTcLQAMHq5S9Fh3gBoXJqso2yK8awEMDC
...
QmTw19KYw2c8w5756s95ccKpFoFhbP6oXa4Zg5cLbLD79N
Error: object API does not support HAMT-sharding. To create big directories, please use the files API (MFS)

Remaining work:

  • add the same check to rm-link and set-data and append-data
    • better to have the same behavior everywhere and fail hard, and add an override flag (below)
  • add --force flag to disable the block size check on impacted commands
    • we need to have an escape hatch to unblock people who need the old behavior, otherwise some collabs won't be able to upgrade.
    • rationale: we want to apply soft pressure, but if someone wants to delay deprecation of ipfs object APIs in their codebase, they should have the ability to do so without the need for forking go-ipfs.
  • add sharness tests (fine to reuse my one-liner)
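The escape hatch described in the list above follows a common CLI pattern: a boolean override that downgrades the hard failure. A minimal self-contained sketch using Go's standard flag package (the real command is wired through go-ipfs's command machinery, and the flag was ultimately named --allow-big-block; `guardSize` and the error text are illustrative):

```go
package main

import (
	"errors"
	"flag"
	"fmt"
)

const maxBlockSize = 1024 * 1024 // bitswap's 1 MiB transfer limit

// guardSize fails hard on oversized blocks unless the user has
// explicitly opted out of the check via the override flag.
func guardSize(size int, allowBigBlock bool) error {
	if size > maxBlockSize && !allowBigBlock {
		return errors.New("produced block is over 1MiB; use the override flag to keep the old behavior")
	}
	return nil
}

func main() {
	allow := flag.Bool("allow-big-block", false,
		"disable the block size check (big blocks can't be exchanged with other peers)")
	flag.Parse()
	if err := guardSize(2*maxBlockSize, *allow); err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Println("patched")
}
```

The override keeps the soft pressure toward deprecation while letting users who depend on the old behavior upgrade without forking.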

if err != nil {
	return err
}
if modifiedNodeSize > 1024 * 1024 {
Contributor

My guess would be that returning an error wouldn't suffice. The node is already too big and this has to be reverted. But reverting it here is hard, because the reversal can itself cause another error. So I would think this must be rejected at the ObjectAPI level.

Contributor Author


@petar I see two places where we could potentially put this check deeper in the stack:

  • (api *ObjectAPI) AddLink: after the InsertNodeAtPath call (which has the actual 'insertion' logic) using a check similar to the one placed at the CLI level (fetch the modified node and check its size). We might also want to evaluate purging the modified copy from the repo.

  • (deeper) in the go-merkledag repo, executing the actual insertion logic: it has the advantage that it already has a permanent/temporary store division where rejecting the modified node wouldn't impact the go-ipfs repo. The downside is that we would be injecting an arbitrary restriction on a library based on the limitation of another (bitswap) just because both are consumed by the same go-ipfs logic.

Either is doable, so I'll leave the decision of where to put the check to you.
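The reverting concern raised above can be sketched abstractly: encode the candidate node first, validate its size, and only commit it to the store once the check passes, so an oversized result never has to be rolled back. The `store` type and `putChecked` method are hypothetical stand-ins for the ObjectAPI/merkledag machinery, not real go-ipfs identifiers.

```go
package main

import "fmt"

const maxBlockSize = 1024 * 1024 // bitswap's 1 MiB transfer limit

// store is a stand-in for the blockstore: key -> encoded node.
type store map[string][]byte

// putChecked validates the encoded node before any mutation,
// so a rejection never needs to be reverted.
func (s store) putChecked(key string, encoded []byte) error {
	if len(encoded) > maxBlockSize {
		return fmt.Errorf("refusing %q: %d bytes exceeds the limit", key, len(encoded))
	}
	s[key] = encoded // commit only after the check passes
	return nil
}

func main() {
	s := store{}
	fmt.Println(s.putChecked("small", make([]byte, 16)) == nil)                  // true: stored
	fmt.Println(s.putChecked("big", make([]byte, 2*maxBlockSize)) != nil, len(s)) // true 1: rejected, never stored
}
```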

@BigLep
Copy link
Contributor

BigLep commented Sep 24, 2021

Per the 2021-09-24 verbal discussion: this is a bigger problem, so we're going to take care of the immediate issue and not put a check deeper in the stack (e.g., at the blockstore).

We believe the check is happening at the right level but we should incorporate Petar's feedback to avoid mutating the datastore for the error case.

Check to see if we can add a method to the object API that checks the size and rejects if it's too big before it mutates anything.

If that's a huge pain, we can leave it as is, since GC will take care of unnecessarily added data.

@petar can provide additional comments.

Member

@lidel lidel left a comment


LGTM. Let's keep it small and merge it as-is (protect people from creating non-transferable blocks ASAP).

Made small changes:

  • renamed the override flag to --allow-big-block and made the error more explicit about what happened and why
  • separate tests for default and override behavior

@schomatis is this still a draft, or ready for merge? Confirmed this is good to merge to solve the burning problem ASAP.

- renamed override flag to --allow-big-block
- separate tests for default and override behavior
@lidel lidel force-pushed the schomatis/cli/ipfs-object-patch-size-error branch from 5189eeb to 77f420e on September 27, 2021 15:00
@schomatis schomatis marked this pull request as ready for review September 28, 2021 13:12
@lidel lidel merged commit 485a5c2 into master Sep 28, 2021
@lidel lidel deleted the schomatis/cli/ipfs-object-patch-size-error branch September 28, 2021 13:27
@aschmahmann aschmahmann mentioned this pull request Dec 1, 2021
Successfully merging this pull request may close these issues.

ipfs object patch does not work with HAMT-sharded directories